Measuring Informational Distances Between Sensors and Sensor Integration
Abstract
In embodied artificial intelligence it is of interest to study the informational relationships between the agent, its actions, and the environment. This paper presents a number of statistical measures for computing the informational distance between sensors, including the information metric, the correlation coefficient, the Hellinger distance, and the Kullback-Leibler and Jensen-Shannon divergences. The methods are compared using the sensory reconstruction method to find spatial positions of visual sensors of different modalities in a sensor integration task. The results show how the information metric can find relations not found by the other measures.

Introduction

In the early 1960s H. B. Barlow suggested (Barlow, 1961) that the visual system of animals “knows” about the structure of natural signals and uses this knowledge to represent visual signals. Ever since then, neuroscientists have analysed the informational relationships between organisms and their environment. In recent years, with the advent of embodied artificial intelligence, there has also been an increased interest in robotics and artificial intelligence in studying the informational relations between the agent and its environment, and how the actions of the agent affect its sensory input. It is believed that this research can give us new principles and quantitative measures which can be used to build robots that exploit bootstrapping (Prince et al., 2005) and continuously learn, develop, and adapt depending on their particular environment and the task to perform. This paper presents some work in this area: it describes a number of methods for computing the distance between sensors and shows how these methods can be useful for integrating different sensor modalities.

The informational relationships between sensors depend on the particular embodiment of an agent. Thus, these relationships can be useful for the agent to learn about its own body, the potential actions it can perform, and how the sensors relate to its particular environment. In (Olsson et al., 2004b) the sensory reconstruction method, first described by Pierce and Kuipers (1997), was applied to robots and extended by considering the informational relations between sensors. The results showed how the visual field could be reconstructed from raw and uninterpreted sensor data and how some symmetry of the physical body of the robot could be found in the created sensoritopic maps. This method was also used in (Olsson et al., 2005b) to show how a robot can develop from no knowledge of its sensors and actuators to performing visually guided movement.

Another aspect of the information available in an agent's sensors is that the particular actions of the agent can affect the nature and statistical structure of its sensory input. This has been studied in a number of papers since (Lungarella and Pfeifer, 2001); see for example (Sporns and Pegors, 2003, 2004; Lungarella et al., 2005). The results show how saliency-guided movement decreases the entropy of the input while increasing the statistical dependencies between the sensors. The specific environment of an agent also limits in principle what an agent can know about the world and the physical and informational relationships of its sensors (Olsson et al., 2004a). Information-theoretic measures have also been used to classify behaviour and interactions with the environment using raw and uninterpreted sensor data from the agent.
In (Tarapore et al., 2004) the statistical structure of the sensory input was used to fingerprint interactions and environments. Mirza et al. (2005b) considered how the informational relationships between sensors, as well as actuators, can be used to build histories of interaction by classifying trajectories in the sensorimotor phase space. In (Kaplan and Hafner, 2005) the authors also considered clustering behaviours by the informational distances between sensors, using configurations of matrices of information distances between all pairs of sensors.

One important issue in this research is which measures to use to quantify the informational relationships. In (Lungarella et al., 2005) the authors present a number of methods for quantifying informational structure in sensor and motor data. Their focus is on integration, i.e., how much information two or more sources have in common. In this paper we focus on the opposite, i.e., how to compute how different two or more sources are. Following (Olsson et al., 2004b), several papers including (Olsson et al., 2004a, 2005b,c,a, 2006; Mirza et al., 2005a,b; Kaplan and Hafner, 2005; Hafner and Kaplan, 2005) have used the information distance metric discussed by Crutchfield (1990) to compute the informational distance between sensors. An important question the authors have received several times in reviews of papers and in discussions is “why the information metric?”. This is a good question, and in this paper we present a number of alternative distance measures suggested by colleagues and reviewers, alongside the information metric. To compare the potential utility of the methods we apply each of them as the distance measure in the sensory reconstruction method (Pierce and Kuipers, 1997; Olsson et al., 2004b). In the experiment the sensors of the visual field of a robot are split into three different modalities: red, blue, and green, and the problem is to find the relationships between the sensors, including which sensors come from the same pixel in the camera. This is an example of sensor integration. The results show how the information metric performs better in this problem, as it captures both linear and non-linear relationships between sensors.

The rest of this paper is structured as follows. The next section presents a number of methods for computing the distance between two sensors. Then a short introduction to the sensory reconstruction method is given, before the results of the experiments are presented. The final section concludes the paper.

Measuring the Distance Between Sensors

In this section we present a number of methods for computing the distance between two sensors S_x and S_y. Each sensor can assume one of a discrete number of values (continuous values are discretized), S_x ∈ X, at each time step t, where X is the alphabet of possible values. Thus, each sensor can be viewed as a time series of data {S_x^1, S_x^2, ..., S_x^T} with T elements. Each sensor can also be viewed as a random variable X drawn from a particular probability distribution p_x(x), where p_x(x) is estimated from the time series of data. Similarly, the joint probability distribution p_{x,y}(x, y) is estimated from the sensors S_x and S_y.
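As a concrete illustration of this estimation step, the following Python sketch (ours, not from the paper; the function name and the toy alphabet are illustrative assumptions) estimates the marginal and joint distributions of two discretized sensor time series by simple frequency counting:

```python
from collections import Counter

def estimate_distributions(sx, sy):
    """Estimate p_x, p_y and the joint p_xy by frequency counting from two
    equally long, already discretized sensor time series (lists of symbols)."""
    assert len(sx) == len(sy)
    T = len(sx)
    px = {a: c / T for a, c in Counter(sx).items()}
    py = {b: c / T for b, c in Counter(sy).items()}
    pxy = {ab: c / T for ab, c in Counter(zip(sx, sy)).items()}
    return px, py, pxy

# Toy example: two short sensor streams over the alphabet {0, 1, 2}.
sx = [0, 1, 2, 1, 0, 2, 2, 1]
sy = [0, 1, 2, 2, 0, 2, 1, 1]
px, py, pxy = estimate_distributions(sx, sy)
```

In practice the quality of these estimates depends on the length T of the time series and on the size of the alphabet, which is one reason continuous sensor values are discretized into a small number of bins.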
A distance measure d(X, Y) is a distance function on a set of points, mapping pairs of points (X, Y) to non-negative real numbers. A distance metric in the mathematical sense also needs to satisfy the following three properties:

1. d(X, Y) = d(Y, X) (Symmetry).
2. d(X, Y) = 0 iff Y = X (Equivalence).
3. d(X, Z) ≤ d(X, Y) + d(Y, Z) (Triangle Inequality).

If (2) fails but (1) and (3) hold, then we have a pseudometric, from which one canonically obtains a metric by identifying points at distance zero from each other. This is done here and in (Crutchfield, 1990). Why can it be useful to use distance measures which are metrics in the mathematical sense? If a space of information sources has a metric, it is possible to use some of the tools and terminology of geometry. It might also be useful to be able to talk about sensors in terms of spatial relationships. This can be of special importance if the computations are used to actually discover some physical structure or spatial relationships of the sensors, as in (Olsson et al., 2004b), where the spatial layout of visual sensors as well as some physical symmetry of a robot was found by information-theoretic means.

Distance Measures

The 1-norm distance used in (Pierce and Kuipers, 1997) differs from the distance measures that follow in that it does not take into account the probabilities of the different values that a sensor can take. It is normalized between 0.0 and 1.0.
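A plausible form of this 1-norm distance, consistent with the description in Pierce and Kuipers (1997) and assuming sensor values rescaled to the interval [0, 1], is

d_1(S_x, S_y) = \frac{1}{T} \sum_{t=1}^{T} |s_x^t - s_y^t|,

where s_x^t is the rescaled value of sensor S_x at time step t; the exact normalization used in the experiments may differ.

The probability-based measures named above can be computed directly from the estimated distributions. The Python sketch below is ours rather than the paper's: it uses standard textbook definitions of the Kullback-Leibler and Jensen-Shannon divergences and the Hellinger distance on the two marginal distributions, a simple 1 - |r| correlation-based distance on the raw numeric series, and the information metric d(X, Y) = H(X|Y) + H(Y|X) discussed by Crutchfield (1990); the normalizations used in the experiments may differ.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {value: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def information_metric(px, py, pxy):
    """Crutchfield's information metric d(X, Y) = H(X|Y) + H(Y|X),
    computed here as 2*H(X, Y) - H(X) - H(Y)."""
    return 2 * entropy(pxy) - entropy(px) - entropy(py)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q); assumes q(x) > 0 wherever p(x) > 0."""
    return sum(pv * math.log2(pv / q[x]) for x, pv in p.items() if pv > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two marginal distributions."""
    support = set(p) | set(q)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in support}
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def hellinger_distance(p, q):
    """Hellinger distance between two marginal distributions."""
    support = set(p) | set(q)
    s = sum((math.sqrt(p.get(x, 0.0)) - math.sqrt(q.get(x, 0.0))) ** 2 for x in support)
    return math.sqrt(s / 2.0)

def correlation_distance(sx, sy):
    """1 - |Pearson correlation| of the raw numeric time series."""
    T = len(sx)
    mx, my = sum(sx) / T, sum(sy) / T
    cov = sum((a - mx) * (b - my) for a, b in zip(sx, sy))
    var_x = sum((a - mx) ** 2 for a in sx)
    var_y = sum((b - my) ** 2 for b in sy)
    return 1.0 - abs(cov) / math.sqrt(var_x * var_y)
```

Given the marginal and joint distributions estimated in the previous sketch, information_metric(px, py, pxy) can serve directly as the distance between two sensors in the sensory reconstruction method, with the other functions providing the alternative measures compared against it.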